Mitchell Hashimoto says GitHub's outages and workflow failures have made the platform unsuitable for Ghostty's active development.
Ghostty, a modern GPU-accelerated terminal emulator developed by Mitchell Hashimoto, is transitioning its active development away from GitHub due to ongoing reliability issues that have disrupted daily workflows.
Hashimoto announced the decision in an emotional post titled "Ghostty Is Leaving GitHub," stating that the project will gradually eliminate its dependency on GitHub while maintaining the current repository as a read-only mirror. Further details about the new hosting platform will be provided in the coming months, as discussions continue with both commercial and open-source providers.
This decision is significant given Hashimoto's background. He is best known as the co-founder of HashiCorp (departed in 2023), the infrastructure automation company behind widely used tools such as Terraform, Vault, Consul, Nomad, Packer, and Vagrant, which are the de facto standard in DevOps circles today.
In his post, Hashimoto describes the decision as personally difficult rather than a result of casual dissatisfaction. He notes that he used the platform daily for over 18 years, which makes the current decision even harder.
"I'm GitHub user 1299, joined Feb 2008. Since then, I've opened GitHub every single day. Every day, multiple times per day, for over 18 years. Over half my life. A handful of exceptions in there (I'd love to see the data), but I can't imagine more than a week per year."
Hashimoto states he has recently been publicly critical of GitHub due to daily service failures. He kept a journal over the past month, marking each day when a GitHub outage negatively impacted his work, and notes that almost every day was affected.
"For the past month I've kept a journal where I put an "X" next to every date where a GitHub outage has negatively impacted my ability to work. Almost every day has an X. On the day I am writing this post, I've been unable to do any PR review for ~2 hours because there is a GitHub Actions outage".
According to Hashimoto, however, the issue is not with Git itself. He clarifies that the problem lies in the surrounding GitHub infrastructure, including issues, pull requests, GitHub Actions, and related collaboration workflows. For Ghostty, these failures have impacted both maintainers and the broader open-source community, prompting the decision to move away.
However, it is hard not to notice the strong disappointment in his words regarding the popular developer platform.
"It's not a fun place for me to be anymore. I want to be there but it doesn't want me to be there. I want to get work done and it doesn't want me to get work done. I want to ship software and it doesn't want me to ship software. I want it to be better, but I also want to code. And I can't code with GitHub anymore. I'm sorry. After 18 years, I've got to go."
Importantly, Ghostty will not be removed from GitHub immediately. The migration will occur incrementally, and the current GitHub repository will remain available as a read-only mirror. Hashimoto notes that his personal projects and other work will stay on GitHub for now, with Ghostty prioritized due to the significant impact of reliability issues.
What have been your experiences with GitHub - both positive and negative? Do you recommend any alternatives?
https://distrowatch.com/dwres.php?resource=showheadline&story=20175
A developer with the Devuan project has created a fork of the GTK2 toolkit with an aim to maintain it, provide fixes, and make it possible for older applications to remain compatible with the GTK toolkit. While GTK2 has not been maintained upstream for years, and it has been dropped from the latest versions of some distributions, it was the basis for many applications, not all of which have migrated to GTK3. The announcement thread has more information and the code has been published in Devuan's git repository.
Trinity Desktop Environment (TDE) R14.1.6 has been released today for nostalgic KDE 3.5 users as the sixth maintenance release of the R14.1.x series, adding new features and enhancements.
Coming about five and a half months after Trinity Desktop Environment R14.1.5, the Trinity Desktop Environment R14.1.6 release introduces support for recent GNU/Linux distributions, including Ubuntu 26.04 LTS (Resolute Raccoon), Fedora Linux 44, and Mageia 10, as well as support for the LoongArch64 architecture on Debian 14 Forky/Sid.
Trinity Desktop Environment R14.1.6 also updates the available search engines and graphical UI, adds a "Go to Desktop" action to the Konqueror browser, adds an option for a 3D border to the Kicker application menu, and adds drag-and-drop support for snapshots into other apps from the KSnapshot screenshot utility.
In addition, this release adds "Compatibility options" and "Currency signs" options under Kxkb's "Miscellaneous options", improves the handling of special Unicode characters in TQt (Trinity Qt), and improves arrow key and Page Up/Page Down navigation and the scrollbar in the KCharSelect character selection tool.
Furthermore, Trinity Desktop Environment R14.1.6 brings improvements to various TDE-branded icons, pictures, and artwork, removes the sloppy Flying Konqi wallpaper, adds support for XZ archives to TDE's KIO slave component, and adds support for Poppler 26.04 and later to TDE's graphics utilities.
TDE's TWin (Trinity Window Manager) has been improved as well in this release, with fixes to tiling of maximized windows and opacity-related issues, making using transparency easier and providing an overall better user experience.
Also improved were signature verification in KMail's encrypted emails and language translations in the KVIrc IRC client. Moreover, TDE R14.1.6 adds a filesystem type indication to the TDE Display Manager's Meta Info property page.
Of course, numerous bugs were fixed, so check out the full release notes for more details about the changes included in Trinity Desktop Environment R14.1.6, which you can download for Linux distros, as well as BSD and DilOS systems from the official website. Upgrading from Trinity Desktop Environment R14.1.5 should be straightforward.
https://dillo-browser.org/release/3.3.0/
https://dillo-browser.org/
Dillo is a fast and small graphical web browser with the following features:
- Multi-platform, running on Linux, BSD, macOS, Windows (via Cygwin) and even Atari.
- Written in C and C++ with few dependencies.
- Implements its own real-time rendering engine.
- Low memory usage and fast rendering, even with large pages.
- Uses the fast and bloat-free FLTK GUI library.
- Support for HTTP, HTTPS, FTP and local files.
- Extensible with plugins written in any language (see the list of plugins).
- Free software, licensed under the GPLv3.
- Helps authors comply with web standards via its built-in bug meter.
https://fedoramagazine.org/announcing-fedora-linux-44/
I'm excited to announce that Fedora Linux 44 is here! Keep reading to discover highlights of Fedora Linux 44, or if you are ready, just jump right in and give Fedora Linux 44 a try!
Thanks to everyone who helped!
Thank you and congrats to everyone who has contributed to this release. And thanks to everyone who showed up for the virtual release party last Friday. We celebrated a little early this year, just after the go/no-go meeting made the release official. If you weren't able to join us live, you can watch the recording and hear about some of the great work from the contributors involved.
Looking to upgrade?
If you have an existing system, Upgrading Fedora Linux to a New Release is easy. In most cases, it's not very different from just rebooting for regular updates, except you'll have a little more time to grab a coffee.
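For existing systems, the documented path is the DNF system-upgrade plugin. A minimal sketch of that flow, targeting release 44 (check the official upgrade guide before running, as exact package names can differ across Fedora versions):

```shell
# Bring the current release fully up to date first
sudo dnf upgrade --refresh

# Install the system-upgrade plugin if it isn't present
sudo dnf install dnf-plugin-system-upgrade

# Download all Fedora Linux 44 packages, then reboot into the offline upgrade
sudo dnf system-upgrade download --releasever=44
sudo dnf system-upgrade reboot
```

The download step resolves the whole transaction up front, so the reboot-and-upgrade phase runs offline and unattended.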
Ready to Fresh Install?
If this is your first time running Fedora Linux, or if you just want to start fresh with Fedora, download the install media for our flagship Editions (Workstation, KDE Plasma Desktop, Cloud, Server, CoreOS, IoT), or one of our Atomic Desktops (Silverblue, Kinoite, Cosmic, Budgie, Sway), or alternate desktop options (like Cinnamon, Xfce, Sway, or others).
What's new?
As usual with Fedora Linux, there are just too many individual changes and improvements to go over in detail. You'll want to take a look at the release notes for that.
The C64C Ultimate will be manufactured using the same tools as the original was back in 1986.
The creators of the C64 Ultimate, a recreation of the iconic '80s personal computer that uses an FPGA chip to accurately replicate the original, have announced a follow-up version that continues in its predecessor's footsteps. The original Commodore 64 first debuted in 1982 and was followed by the Commodore 64C in 1986, which was functionally nearly identical but introduced a slimmer case and a more modern color scheme. It's the same story for the new Commodore 64C Ultimate. It gives the C64 Ultimate a welcome facelift, but there's no new functionality.
To make the C64C an authentic recreation of the original – at least on the outside – the reborn Commodore reacquired the exact same injection tooling molds that were used to manufacture the original's plastic housing 40 years ago. The new C64C Ultimate even features faint semi-circular marks on its housing resulting from melted plastic cooling unevenly inside the molds, a sign of authenticity that would be overly complicated to fake.
As with the C64 Ultimate, the new C64C Ultimate features upgrades like Wi-Fi, USB, and an HDMI port for connecting it to modern displays. But it also carries forward the same ports from the 1986 version of the computer and is compatible with its '80s-era peripherals like floppy disk and cassette drives. It's available for preorder now starting at $299.99 with shipping expected as early as September, while more premium versions that add upgrades like LED lighting, a translucent case, and gold keycaps go up to $499.99.
Civil liberty concerns spur FAA to revise drone no-fly zones near ICE vehicles:
In January 2026, during the height of protests against immigration raids in Minneapolis, federal agents shot and killed 37-year-old Renee Good. Before even gathering all the facts, the Department of Homeland Security labeled the mother of three an "anti-ICE rioter" who "weaponized her vehicle against law enforcement" in an "act of domestic terrorism."
Days later, the feds announced a major expansion of "no-fly zones" in the name of national security. While such no-fly zones used to be about controlling aircraft, they now often focus on small drones. The expanded no-fly zones announced on January 16 prohibited such drones from flying within 3,000 lateral feet and 1,000 vertical feet of federal facilities.
But for the first time, the order extended no-fly zones to ground vehicles belonging to the Department of Homeland Security. Even while the vehicles were in motion. Even if they were unmarked. And even if their routes had not been announced.
This exceptionally ambiguous policy posed real danger to people like Rob Levine, a freelance photojournalist and commercial photographer in Minneapolis for nearly four decades. Since Levine got his remote-pilot certification and bought his first drone in 2016, he has flown a small fleet of DJI quadcopter drones to take aerial photographs and videos of Minnesota's rivers, bridges, and cities, along with crowds gathered for outdoor concerts and parades. More recently, he has documented Twin Cities residents protesting the increased presence of federal agents in their community.
Levine immediately stopped flying when he saw the no-fly notice. The notice said government agencies could shoot down or seize drones "deemed to pose a credible safety or security threat," and it warned of civil and even criminal penalties for drone operators.
"I saw what these federal agents were willing to do, the violence they were willing to visit upon even constitutional observers here in the Twin Cities who were just photographing what they were doing," Levine told Ars.
Good's killing had occurred just six blocks from his home. "It didn't take much imagination to think what they would do to somebody with a drone, and so for weeks I didn't go fly," he said.
A week after the no-fly zone warning, the situation in Minneapolis escalated further when Customs and Border Protection officers killed Alex Pretti, a 37-year-old intensive care nurse, after wrestling him to the ground and shooting him multiple times.
Levine wanted his drones back in the air. But when he sought guidance from the Federal Aviation Administration, the agency candidly acknowledged that the no-fly zone warning was "ambiguous" and "therefore, any flight carries the risk of inadvertent violation."
Could such a policy possibly be legal?
The FAA had previously only advised that drone pilots avoid flying near "mobile assets" operated by the Department of Defense and Department of Energy, such as naval warships and truck convoys transporting nuclear materials between US national labs. But the "notice to airmen" alert in January—NOTAM FDC 6/4375—had created the equivalent of roving, 3,000-foot no-fly zones around federal agents' cars and other vehicles operating in cities and towns across the country. And it didn't just affect those trying to film federal agents. Because it was practically impossible to ensure compliance with the new flight restrictions, any drone pilot could be at risk during any flight.
"It created a whole lot of fear in the community," said Vic Moss, CEO and cofounder of the Drone Service Providers Alliance, a drone industry trade association based in Lakewood, Colorado. In a post on March 11, Moss described the FAA flight restriction as posing an "impossible compliance problem" for drone operators, who could end up "ensnared inside a restricted zone with no way of knowing it."
Drone pilots in the United States must use apps such as Air Control to seek official permission to fly in controlled airspaces. Any drones larger than 0.55 pounds must be registered with the FAA and have a Remote ID module that can "squawk" the drone's identification and location at all times. That makes it easy for federal agents or authorities to see where drone operations are taking place. But the system provided no way for drone operators to avoid unmarked government vehicles in motion.
The no-fly zone restrictions were also exceptional in their length and scope. The FAA regularly issues temporary flight restrictions during natural disasters or to protect the airspace around government officials and sporting events such as professional baseball or football games. Most restrictions last just hours or days and cover specific geographic locations, according to the Electronic Frontier Foundation.
But the restrictions issued on January 16, 2026, would last until October 29, 2027—21 months—while covering many federal facilities and vehicles across the entire United States.
Given these unprecedented restrictions, the Electronic Frontier Foundation joined other members of the News Media Coalition—an international organization that includes more than 50 news organizations—in sending a letter to the FAA's Office of the Chief Counsel.
The letter detailed "significant concerns regarding the FAA's January 16, 2026 sweeping and unprecedented Temporary Flight Restriction." It described the flight restrictions as violating the First Amendment by making it more difficult to record law enforcement officers. The letter also argued that the policy's ambiguity violated the Fifth Amendment to the US Constitution, which guarantees the right to due process before being deprived of liberty or property by the government.
Back in Minnesota, Levine spent weeks looking for lawyers who could help him challenge the FAA flight restriction as a freelance photojournalist—but he was racing against a deadline. One law firm alerted him that he had only 60 days to file a petition regarding the FAA decision. But he couldn't find a law firm willing to back him.
"To me, this was an obviously unconstitutional rule by the FAA," Levine told Ars Technica. "Even when I was looking for a lawyer, I had a lot of sympathetic ears, but nobody offered to take the case or to even help me with it."
Levine eventually called a hotline for the Reporters Committee for Freedom of the Press, a nonprofit in Washington, DC, that offers free legal services. The organization took the case and filed a lawsuit, designated Levine v. FAA (26-1054), with the Court of Appeals for the DC Circuit on March 16.
They had barely beaten the petition deadline.
By March 16, it was common knowledge in the aviation industry that the FAA was aware of the issues and had prepared a revised version of its flight restriction notice, Moss said. But another federal agency was apparently holding up the revision. Many suspected that the agency responsible for the delay was the Department of Homeland Security (DHS).
"I think anybody with more than four synapses firing at the same time can realize that this was a DHS issue," Moss said.
A Department of Homeland Security spokesperson told Ars only that "DHS routinely coordinates with the FAA on airspace restrictions to support operational security and safety of the Department."
On April 10, Levine and his lawyers pressed ahead by filing an emergency motion seeking to temporarily suspend the FAA flight restriction until the court had a chance to review the case.
That may have expedited the government's next move. On April 15, the FAA removed the no-fly zones by replacing the sweeping flight restrictions with a "national security advisory" titled NOTAM FDC 6/2824. The revised notice dropped all mentions of flight restrictions and criminal charges. It instead "advised" drone pilots to avoid flying near "covered mobile assets" belonging to the Department of Homeland Security and several other federal agencies.
The revised notice was intended to "clarify drone operations based on user feedback," according to an FAA statement shared with Ars. An FAA spokesperson confirmed that "the revised NOTAM removes the flight prohibition and instead advises pilots to use caution near protected operations while enabling federal security partners to assess and respond to potential threats."
Levine and his lawyers were pleased. "First and foremost, our goal was to get the restriction thrown out so that Rob [Levine] and other journalists could be up in the air again," said Grayson Clary, a staff attorney at the Reporters Committee for Freedom of the Press. "So on that front, we think this is already a victory."
But Clary still plans to press ahead with the lawsuit.
"We're cognizant that the FAA is doing this because they don't want to have to defend what they did here on the merits in front of the DC Circuit, and we are going to fight back on that tactical gamesmanship," Clary said. "We do plan to make clear to the DC Circuit that this shouldn't have happened in the first place."
The new FAA advisory wording is "a lot better than it was," but it still comes off as "too ambiguous," according to Moss at the Drone Service Providers Alliance. He suggested that the Department of Homeland Security could handle any potential drone concerns rather than making it an FAA issue.
"If there's somebody harassing them with a drone, then I think there's other ways that can be dealt with," he said.
The FAA advisory is also potentially problematic because it still creates a "chilling effect to dissuade people from taking photos and videos, particularly of immigration enforcement agents, from the air," said Sophia Cope, a senior staff attorney at the Electronic Frontier Foundation.
Like the earlier notice, the new advisory warns that federal agents can seize, damage, or destroy drones "deemed to pose a credible safety or security threat to covered mobile assets."
"The threats that [drones] present to the national security and mission of DHS are evolving, and the approaches to securing the locations and personnel of the Department must also evolve," the Department of Homeland Security spokesperson said. "We ask that the [drone] user community respect the security of DHS operations, personnel and facilities and refrain from operating in vicinity of known enforcement activities, and all federal facilities."
The FAA advisory cites three existing laws as giving the federal agencies authority to seize or destroy drone threats.
But those laws first require federal agencies to have performed risk-based assessments to identify specific drone threats to the covered assets. It's unclear whether agencies have done those assessments, Cope said, and therefore, "they're just disincentivizing people from engaging in lawful, First Amendment protected activity."
That chilling effect was very real for Levine while the initial flight restriction was in place. Hesitation cost him the chance to take aerial photos of protestors putting up roadblocks in his neighborhood to stop federal agents' vehicles toward the end of the US government's Operation Metro Surge. Even when a friend asked him to help take drone videos and photos of a performance art event on February 28, he had to think hard about the risks.
As he tells it, "I eventually just screwed up my courage, as little as I have, and said 'OK, I'm gonna do it.'"
Patches land for authencesn flaw enabling local privilege escalation
https://hackread.com/linux-kernel-vulnerability-copy-fail-full-root-access/
Developers of major Linux distributions have begun shipping patches to address a local privilege escalation (LPE) vulnerability arising from a logic flaw.
The newly disclosed LPE, dubbed Copy Fail (CVE-2026-31431), comes from a vulnerability in the Linux kernel's authencesn cryptographic template.
"An unprivileged local user can write four controlled bytes into the page cache of any readable file on a Linux system, and use that to gain root," the writeup from security biz Theori explains.
The kernel reads the page cache when it loads a binary, so modifying the cached copy amounts to altering the binary for the purpose of program execution. But doing so doesn't trigger any defenses focused on file system events like inotify.
The proof of concept exploit is a 10-line, 732-byte Python script capable of editing a setuid binary to gain root on almost all Linux distributions released since 2017.
Copy Fail is similar to other LPE bugs such as Dirty Cow and Dirty Pipe, but its finders claim it doesn't require winning a race condition and it's more broadly applicable.
It's not remotely exploitable on its own – hence LPE – but if chained with a web RCE, malicious CI runner, or SSH compromise, it could be relevant to an external attacker. The bug is of most immediate concern to those using multi-tenant Linux systems, shared-kernel containers, or CI runners that execute untrusted code.
According to Theori, the vulnerability also represents a potential container escape primitive that could affect Kubernetes nodes, because the page cache is shared across the host.
Linux distros Debian, Ubuntu, and SUSE have issued patches for the problem, as have overseers of other distros.
Red Hat initially said it was going to defer the fix but later changed its guidance to indicate it will go along with other distros and patch promptly.
The CVE has been rated High severity, 7.8 out of 10.
Theori researcher Taeyang Lee identified the vulnerability, with the help of the company's AI security scanning software, Xint Code.
The number of bug reports has surged in recent months, helped by AI-powered flaw-finders. Microsoft just reported the second largest number of patches ever.
Dustin Childs, head of threat awareness for Trend Micro's Zero Day Initiative, expects this is due to security teams using AI to hunt bugs. "There are many things we could speculate on to justify the size, but if Microsoft is like the other programs out there (including ours), they are likely seeing a rise in submissions found by AI tools," he wrote earlier this month.
AI-assisted vulnerability research recently prompted the Internet Bug Bounty (IBB) program to suspend awards until it can understand how to manage the growing volume of reports.
Apple wants to kill your Time Capsule, but they run NetBSD so they can't:
It seems like Apple is finally going to remove support for AFP from macOS, twelve years after first moving from AFP to SMB for its default network file-sharing technology. This change shouldn't impact most people, as it's highly unlikely you're using AFP for anything in 2026. Still, there is one small group of people to whom this change has an actual impact: owners of Apple's Time Capsule devices. Time Capsules only support AFP and SMB1, and with SMB1 being removed from macOS ages ago, and now AFP being on the chopping block as well, macOS 27 would render your Time Capsule more or less unusable.
It's important to note that the last Time Capsule sold by Apple, the fifth generation, was released in 2013, and the product line as a whole was discontinued in 2018. If you bought a Time Capsule in the twilight years of the line's availability, I think you have a genuine reason to be perturbed by Apple cutting you off from your product if you upgrade to macOS 27, but at least you have the option of keeping an older version of macOS around so you can keep interacting with your Time Capsule. It still feels like a bit of a shitty move though, as those fifth generation models came with up to 3TB of storage, which can still serve as a solid NAS solution.
Thank your lucky stars, then, that open source can, as usual, come to the rescue when proprietary software vendors do what they always do and screw over their customers. Did you know every generation of Time Capsule actually runs NetBSD, and that it's trivially easy to add support for Samba 4 and SMB3 authentication to your Time Capsule, thereby extending its life expectancy considerably? TimeCapsuleSMB does exactly that.
If the setup completes successfully, your Time Capsule will run its own Samba 4 server, advertise itself over Bonjour (show up automatically in the "Network" folder on macOS), and accept authenticated SMB3 connections from macOS. You should then be able to open Finder, choose Connect to Server, and use a normal SMB URL instead of relying on Apple's legacy stack. You should also be able to use the disk for Time Machine backups.
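Once Samba 4 is running on the device, connecting from macOS needs none of Apple's legacy stack. A sketch, using a hypothetical hostname `timecapsule.local` and share name `Data` (substitute whatever your Time Capsule actually advertises):

```shell
# Open the Time Capsule's share over SMB3 in Finder
# (hostname and share name here are examples)
open 'smb://timecapsule.local/Data'

# Optionally point Time Machine at the share from the command line;
# tmutil accepts an smb:// URL with credentials embedded
sudo tmutil setdestination -a 'smb://user:password@timecapsule.local/Data'
```

Using `tmutil setdestination` avoids clicking through System Settings, and the `-a` flag appends the share as an additional backup destination rather than replacing an existing one.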
↫ TimeCapsuleSMB
It's compatible with both NetBSD 4 and NetBSD 6-based Time Capsules, although you'll need to run a single SMB activation command every time a NetBSD 4-based Time Capsule reboots. This will also disable any AFP and SMB1 support, but that is kind of moot since those are exactly the technologies that don't and won't work anymore once macOS 27 is released. The installation is also entirely reversible if, for whatever reason, you want to undo the addition of Samba 4.
This whole saga is such an excellent example of why open source software protects users' rights, by design.
Google has signed a classified deal that allows the US Department of Defense to use its AI models for "any lawful government purpose," The Information reports. The agreement was reported less than a day after Google employees demanded CEO Sundar Pichai block the Pentagon from using its AI amid concerns that it would be used in "inhumane or extremely harmful ways."
If the agreement is confirmed, it would place Google alongside OpenAI and xAI, which have also made classified AI deals with the US government. Anthropic was also among that list until it was blacklisted by the Pentagon for refusing the Department of Defense's demands to remove weapon and surveillance-related guardrails from its AI models.
Citing a single anonymous source "with knowledge of the situation," The Information reports that the deal states that both parties have agreed that the search giant's AI systems shouldn't be used for domestic mass surveillance or autonomous weapons "without appropriate human oversight and control." But the contract also says it doesn't give Google "any right to control or veto lawful government operational decision-making," which would suggest the agreed restrictions are more of a pinky promise than legally binding obligations. The deal also requires Google to assist with making adjustments to its AI safety settings and filters at the government's request.
"We are proud to be part of a broad consortium of leading AI labs and technology and cloud companies providing AI services and infrastructure in support of national security," a Google spokesperson said in a statement to The Information, adding that the new agreement is an amendment to its existing government deal. "We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight."
https://read.thecoder.cafe/p/linux-broke-postgresql
On April 3, 2026, Salvatore Dipietro, an engineer at AWS, posted a patch to the Linux kernel mailing list. The reason: on a 96-vCPU Graviton4 machine running Linux 7.0, PostgreSQL throughput had dropped to roughly half of what it produced on Linux 6.x. In this post, we will trace what changed in Linux 7.0, how PostgreSQL manages memory, and what role memory pages play in making the problem appear (or disappear). Get cozy, grab a coffee, and let's begin!
The Problem
Salvatore Dipietro ran pgbench (PostgreSQL's standard benchmarking tool) on a Graviton4 processor with 96 vCPUs. The workload was a benchmark doing simple updates at scale factor 8,470 (i.e., roughly an 847-million-row table), simulating 1,024 clients and 96 threads. A serious, high-parallelism load designed to stress the system.
The results were striking. Linux 7.0 delivered roughly half the throughput of Linux 6.x on the same hardware and workload:
Linux 6.x: 98,565 transactions per second
Linux 7.0: 50,751 transactions per second
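The benchmark described above corresponds roughly to the following pgbench invocation; the exact flags in the original report may differ, and the database name `benchdb` is just a placeholder:

```shell
# Initialize a pgbench database at scale factor 8,470
# (each scale unit is 100,000 rows in pgbench_accounts,
#  so 8,470 x 100,000 = ~847 million rows)
pgbench -i -s 8470 benchdb

# Run the built-in simple-update script (-N) with 1,024 client
# connections spread across 96 worker threads, for a fixed duration,
# printing progress every 10 seconds
pgbench -N -c 1024 -j 96 -T 300 -P 10 benchdb
```

The `-N` (simple-update) builtin skips the contended branch/teller updates of the default TPC-B-like script, which keeps the workload dominated by per-row updates and connection-level parallelism rather than lock contention.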
Colorado has led the US on legislation that ensures people can fix their stuff. Manufacturers tried to claw back that control but ultimately failed—for now:
A controversial bill in Colorado that would have undone some repair protections in the state has failed. The bill had been the target of right-to-repair advocates, who saw it as a bellwether for how tech companies might try to undo repair legislation more broadly in the US.
Colorado's landmark 2024 repair law, the Consumer Right to Repair Digital Electronic Equipment, went into effect in January 2026 and ensured access to tools and documentation people needed to modify and fix digital electronics such as phones, computers, and Wi-Fi routers. The new bill, SB26-090, would have carved out an exception to those repair protections for "critical infrastructure," a loosely defined term that repair advocates worried could be applied to just about any technology.
SB26-090 was introduced during a Colorado Senate hearing on April 2 and was supported by lobbying efforts from companies such as Cisco and IBM. It passed that hearing unanimously. The bill then passed in the Colorado Senate on April 16. On Monday evening, the bill was discussed in a long, delayed hearing in the Colorado House's State, Civic, Military, and Veterans Affairs Committee. Dozens of supporters and detractors gave public comments. Finally, the bill was shot down in a 7-to-4 vote and classified as postponed indefinitely.
Danny Katz, executive director of the local nonprofit consumer advocacy group CoPIRG, says the battle was a group effort. Speaking against the bill were a cohort of repair advocates from organizations such as PIRG, Repair.org, iFixit, Consumer Reports, and local businesses and environmental groups like Blue Star Recyclers, Recycle Colorado, Environment Colorado, and GreenLatinos.
[...] Supporters of the bill, backed by companies like Cisco, had pointed to the potential for cybersecurity risks as their motivation for altering the law's language. If companies were required to make repair tools available to anyone, the theory goes, what's to stop bad actors from using those tools to reverse engineer critical technology like Internet routers? Withholding those tools, they posited, would make them less available to hackers who could misuse them. Advocates of the bill said that companies should be allowed to keep their secrets if it ensured security, though that argument starts to fall apart with a little scrutiny.
At one point in the hearing, Democrat Chad Clifford, a Colorado state representative and the House committee's vice chair who was also a prime sponsor of the bill, made what appeared to be a reference to Cloudflare's very public use of a wall of lava lamps to help randomize Internet encryption, citing it as an example of the need for sensitive systems to be inscrutable in order to be secure.
"I don't know why anybody has to have lava lamps on a wall to keep the Chinese from getting into a network, but it's what they came up with that worked," Clifford said. "How they do that, I believe they should be able to keep it a secret, even in Colorado."
The problem with that argument, as cybersecurity experts pointed out during the hearing, is that the vast majority of hacks are not carried out via replacement parts or by taking apart individual machines. They're remote hacks, where the attacker makes changes in real time, and the people defending have to make changes on the fly without worrying about acquiring permission from the company that makes the equipment.
"There is no time," cybersecurity expert and white hat hacker Billy Rios said during the hearing. "It doesn't work that way."
Besides the cybersecurity argument, the other point of contention was the economics of angering the big tech companies that have invested in the state.
"They're not going to comply and give away the keys to their kingdom for the things that are securing billions of dollars of interest for their customers over the law that we passed," Clifford said. "What they're going to do is just not have commerce on those items here."
That argument didn't carry enough weight to change the vote in supporters' favor. By the end of the hearing, it was clear that everyone was exhausted and uncertain about how exactly the new bill and its amendments would pan out.
"What are we really trying to do here?" said Colorado Representative Naquetta Ricks in her no vote at the end of the hearing. "Are we protecting just one company, or are we looking at really critical infrastructure? I'm not convinced."
Previously:
• Tech Companies Are Trying to Neuter Colorado's Landmark Right-to-Repair Law
• Right to Repair Laws Have Now Been Introduced in All 50 US States
An interesting essay about the issues with vibe coding...
A marketing manager with no engineering background opens Cursor on Monday morning. By Wednesday afternoon, she has a working customer-facing app. It looks polished. It performs the core task. She demos it to her VP, who forwards it to their CMO, who then shows it in the executive staff meeting as evidence that the team is "moving at AI speed."
By Friday, it is in front of customers.
No one asked who owned the decision to ship it. No one tested it against the conditions it would actually face. No one had the cultural standing to say this looks great, and we are not putting it into production. The prototype became a product because the organization had no system for telling the difference.
I watched a version of this scenario play out recently in a boardroom. A senior executive demoed an AI-built internal tool. The room admired the speed. What received less attention were the harder questions: Who would own it after launch? Who would maintain it? And what would happen when it produced an answer that was confidently wrong?
This is what vibe coding is about to expose across businesses. The companies that think the story is about software are going to lose to the companies that understand the story is about judgment.
The Real Trend Is Decision Compression
Andrej Karpathy coined the term "vibe coding" in early 2025 to describe an AI-assisted style of building software through natural-language prompting, often without close inspection of the underlying code. Google Cloud describes vibe coding as a software development practice that makes app building more accessible, especially for people with limited programming experience. Tools like Cursor, Replit, Lovable, Bolt, GitHub Copilot Workspace, v0 by Vercel and Claude Code have moved the practice from novelty to workplace reality with stunning speed.
All of that is true. None of it is the point.
The point is that vibe coding collapses the distance between idea and artifact from months to hours. When that distance collapses, every quality-control mechanism your organization developed over the last 30 years gets bypassed by default. Design review. Security review. Legal review. Brand review. The simple friction of having to convince an engineer your idea was worth building. That is a governance story, not a software story. It is happening at every level of the org chart simultaneously.
[Source]: Forbes
Google has a price for you. Proton found it. The company analyzed over 54,000 demographic profiles using 2025 ad auction data to see what advertisers pay to reach different Americans. The average American generates about $1,605 a year in advertising value. The median is $760. The gap between those two numbers tells the story. A small number of high-value users pull the average up. The business runs on outliers.
The spread is stark. A 35- to 44-year-old man in Bozeman, Montana — no children, desktop user, making high-value corporate searches — is worth an estimated $17,929 per year. An 18- to 24-year-old father in Fort Smith, Arkansas — Android phone, low-value searches — is worth $31.05. That is a 577x difference between two people using the same free service. Device matters. A desktop user is worth 4.9 times more than the same person on Android. An iPhone user is worth 2.7 times more than Android. Having children costs you roughly 17% of your ad value. Advertiser value peaks between ages 35 and 44. By 65, average value drops to $511.
Where you live sets a floor on your price. Local service providers — lawyers, real estate agents, financial planners — bid against each other for local clicks. The more competitive the local market, the higher the floor price for everyone in it. The top markets are Edmond, Oklahoma and Bozeman, Montana, followed by Naperville, Illinois, Santa Fe, New Mexico, and Durham, North Carolina. The least valuable markets are concentrated in the Rust Belt and Appalachia — Wheeling and Parkersburg in West Virginia, Toledo, Ohio, and Buffalo, New York — where lower median incomes and fewer competing advertisers mean less bidding pressure. Over a decade, the average American represents roughly $16,050 in ad value. The most monetized profiles approach $180,000. Most people would not hand a corporation that much money over a lifetime. But that is what the system collects.
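The headline numbers in Proton's analysis are internally consistent; a quick sketch reproducing the arithmetic (all dollar figures are the ones quoted above):

```python
# Reproduce the ratios quoted in the Proton analysis summary.
high_value = 17_929.00   # Bozeman, MT profile, ad value per year
low_value = 31.05        # Fort Smith, AR profile, ad value per year
avg_annual = 1_605       # average American, ad value per year

ratio = high_value / low_value   # gap between the two profiles
decade_avg = avg_annual * 10     # ten-year value of the average profile

print(f"High/low ratio: {ratio:.0f}x")      # matches the article's 577x
print(f"Decade average: ${decade_avg:,}")   # matches the article's $16,050
```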
-----
Google, while big, is only one internet advertiser - and all that collected advertising income actually comes from consumers of the goods and services being advertised, as a premium on the price of the products. One particular medical device I worked on cost $600 to make, and $14,400 to sell at a net price to the patient of $15,000 for the device and another $15,000 to the hospital for the implantation procedure. Yes, the company was operating at break-even, spending 24x what the physical device cost to make and deliver on nothing but sales and marketing - hoping that some day they could get those sales costs down... didn't happen during the 2 years I worked there.
Microsoft, long a symbol of American innovation, is now offering a voluntary early retirement program that targets thousands of its most seasoned U.S. employees. Framed as a generous opportunity for longtime workers, the move instead reveals a deeper corporate calculus: trimming payroll of experienced Americans to redirect resources toward artificial intelligence infrastructure and, likely, a younger, often less expensive workforce:
This is not mere cost-cutting in response to market pressures. It is a strategic thinning of the ranks amid hundreds of billions committed to AI development, at a time when the company has already shed thousands of jobs in recent years. By dangling buyouts before employees whose age plus years of service equals 70 or more—primarily those at senior director level and below—Microsoft aims to reduce its 125,000-strong U.S. workforce by up to 7 percent, or roughly 8,750 people, without the public backlash of outright layoffs.
The program, announced in an internal memo from Chief People Officer Amy Coleman, marks the first such voluntary retirement initiative in the company's 51-year history. Eligible workers will receive notification beginning May 7 and have 30 days to decide. While presented as support for those "considering their next chapter," the timing aligns precisely with Microsoft's voracious appetite for AI spending, projected near $100 billion in capital expenditures this year alone.
[...] Recent history underscores the trend. Microsoft has conducted multiple rounds of job cuts, even as it competes fiercely with Google and others in the AI race. Similar moves at Meta, which recently slashed 10 percent of its workforce to fund infrastructure, reveal an industry-wide willingness to sacrifice people for processors. The human element—wisdom forged through years of problem-solving—receives polite acknowledgment before being shown the door with a severance package and extended healthcare.
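The eligibility rule described above (age plus years of service totaling 70 or more, at senior director level and below) amounts to a simple "rule of 70" predicate. A minimal sketch, with the caveat that the level criterion and function name are illustrative assumptions, not Microsoft's actual policy text:

```python
def eligible_for_buyout(age: int, years_of_service: int,
                        senior_director_or_below: bool) -> bool:
    """Sketch of the reported 'rule of 70' eligibility check.

    Assumption: the article's wording implies both conditions must hold;
    the real program's fine print is not public here.
    """
    return senior_director_or_below and (age + years_of_service) >= 70

# A 55-year-old with 16 years of service qualifies (55 + 16 = 71);
# a 40-year-old with 10 years (sum 50) does not.
print(eligible_for_buyout(55, 16, True))
print(eligible_for_buyout(40, 10, True))
```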
Previously: Tech Industry Lays Off Nearly 80,000 Employees in the First Quarter of 2026 (Almost 50% Due to AI)
I just ran across this while bringing up another Android phone:
It is linked from the F-Droid website:
125 days until lockdown.
Starting September 2026, a silent update, nonconsensually pushed by Google, will block every Android app whose developer hasn't registered with Google, signed their contract, paid up, and handed over government ID.
Every app and every device, worldwide, with no opt-out.
( I have an interest in developing an Android apk for using cellphones as an HMI for Arduinos. )
In August 2025, Google announced a new requirement: starting September 2026, every Android app developer must register centrally with Google before their software can be installed on any device. Not just Play Store apps: all apps. This includes apps shared between friends, distributed through F-Droid, built by hobbyists for personal use. Independent developers, church and community groups, and hobbyists alike will all be frozen out of being able to develop and distribute their software.
Registration requires:
- Paying a fee to Google
- Agreeing to Google's Terms and Conditions
- Surrendering your government-issued identification
- Providing evidence of your private signing key
- Listing all current and all future application identifiers
If a developer does not comply, their apps get silently blocked on every Android device worldwide.
Continued here.
I thought you guys might like this...
Somebody has some 'splainin' to do!
The founder of PocketOS has penned a social media post to warn others about the "systemic failures" of flagship AI and digital services providers. Jer Crane was inspired to write a public response after an AI coding agent deleted his firm's entire production database. The AI agent's misdemeanors were then hugely amplified by a cloud infrastructure provider's API wiping all backups after the main database was zapped. This tag team of digital trouble has wiped out months of customer data essential to the firm's, and its customers', businesses.
[...] "Yesterday afternoon, an AI coding agent — Cursor running Anthropic's flagship Claude Opus 4.6 — deleted our production database and all volume-level backups in a single API call to Railway, our infrastructure provider," sums up the PocketOS boss. "It took 9 seconds."
[...] The PocketOS boss puts greater blame on Railway's architecture than on the deranged AI agent for the database's irretrievable destruction. Briefly, the cloud provider's API allows for destructive action without confirmation, it stores backups on the same volume as the source data, and "wiping a volume deletes all backups." Crane also points out that CLI tokens have blanket permissions across environments.
The irate SaaS founder also observed that Railway actively promotes the use of AI coding agents by its customers, so Crane's use of one on the platform wasn't exploring new frontiers, or at least wasn't supposed to be. Meanwhile, Crane has been offered no recovery solution, and Railway has apparently been hedging carefully about whether one is even possible.
[...] Thankfully, PocketOS had a full three-month-old backup from which data could be restored, so the losses are limited to the interim period.
There are lessons to be learned from mistakes, as usual. Crane lists five things that need to change as the AI industry scales faster than it builds a worthwhile safety architecture: stricter confirmations, scopable API tokens, proper backups, simple recovery procedures, and AI agents operating within proper guardrails.
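Crane's first two fixes, confirmation for destructive actions and environment-scoped tokens, can be sketched as a thin guard layer in front of an infrastructure API. Everything below (ScopedToken, delete_volume, the confirmation convention) is a hypothetical illustration, not Railway's actual API:

```python
# Hypothetical guardrail sketch: a destructive call must pass three checks
# that Crane says were missing: token scope, destructive permission, and an
# explicit confirmation string matching the target resource.
class ScopedToken:
    def __init__(self, environment: str, allow_destructive: bool = False):
        self.environment = environment
        self.allow_destructive = allow_destructive

class GuardError(Exception):
    pass

def delete_volume(token: ScopedToken, environment: str,
                  volume_id: str, confirm: str = "") -> str:
    """Refuse destructive action unless scope and confirmation both match."""
    if token.environment != environment:
        raise GuardError(f"token scoped to {token.environment!r}, "
                         f"not {environment!r}")
    if not token.allow_destructive:
        raise GuardError("token lacks destructive permission")
    if confirm != volume_id:
        raise GuardError("confirmation string must match the volume id")
    return f"deleted {volume_id} in {environment}"

# An agent holding a staging-scoped token cannot touch production, even
# with destructive permission and a correct confirmation string:
staging = ScopedToken("staging", allow_destructive=True)
try:
    delete_volume(staging, "production", "vol-123", confirm="vol-123")
except GuardError as e:
    print("blocked:", e)
```

Under this design, the nine-second deletion Crane describes would have failed at the first check, since the agent's token was meant for the staging environment.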
In the meantime, please follow a thorough backup regimen and be careful out there. This isn't the first time we've seen an AI go rogue and start deleting important databases.
The founder of a software company has issued a public warning after an AI coding assistant erased his company's entire production database and all backups in just nine seconds.
Tom's Hardware reports that Jer Crane, founder of PocketOS, a platform serving car rental businesses, experienced what he describes as catastrophic failures when an AI coding agent deleted critical company data that took months to accumulate. The incident occurred when Cursor, an AI coding tool powered by Anthropic's Claude Opus 4.6, was performing what should have been a routine task in the company's staging environment.
According to Crane's detailed account posted on X, the AI agent encountered an obstacle and independently decided to resolve the issue by deleting the production database in Railway through an API call. Railway is the cloud infrastructure provider used by PocketOS, generally considered more user-friendly than major alternatives like Amazon Web Services. The entire deletion process took only nine seconds to complete.
The situation escalated beyond a simple database deletion due to Railway's infrastructure design. The cloud provider's system stored backups on the same volume as the source data, meaning when the AI agent deleted the primary database, all backup copies were simultaneously erased. This combination of the AI agent's unauthorized action and the infrastructure provider's architecture created what Crane characterizes as a recipe for disaster.
When Crane questioned the AI agent about its actions, he received a response that revealed the extent of the failure. The agent's explanation began with an acknowledgment of poor judgment. According to the verbatim response Crane shared, the AI stated it had guessed that deleting a staging volume through the API would only affect the staging environment without verifying this assumption or consulting Railway's documentation on how volumes function across different environments.
The AI agent's confession continued with an admission of multiple violations of its operational principles. It acknowledged running a destructive action without authorization, failing to understand the consequences before executing the command, and not reading the relevant documentation about Railway's volume behavior across environments. The agent recognized it should have either asked for permission first or found a non-destructive solution to the credential mismatch it encountered.
University of Oregon chemist Christopher Hendon loves his coffee—so much so that studying all the factors that go into creating the perfect cuppa constitutes a significant area of research for him. His latest project: discovering a novel means of measuring the flavor profile of coffee simply by sending an electrical current through a sample beverage. The results appear in a new paper published in the journal Nature Communications.
We've been following Hendon's work for several years now. For instance, in 2020, Hendon's lab helped devise a mathematical model for brewing the perfect cup of espresso, over and over, while minimizing waste. The flavors in espresso derive from roughly 2,000 different compounds that are extracted from the coffee grounds during brewing. So it can be challenging for baristas to reproduce the same perfect cup over and over again.
That's why Hendon and his colleagues built their model for a more easily measurable property known as the extraction yield (EY): the fraction of coffee that dissolves into the final beverage. That, in turn, depends on controlling water flow and pressure as the liquid percolates through the coffee grounds. The model is based on how lithium ions propagate through a battery's electrodes, similar to how caffeine molecules dissolve from coffee grounds.
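Extraction yield itself is a simple ratio, conventionally computed from the beverage's total dissolved solids (TDS). A minimal sketch of that standard formula; the example numbers are illustrative, not from Hendon's paper:

```python
def extraction_yield(beverage_mass_g: float, tds_percent: float,
                     dry_dose_g: float) -> float:
    """EY% = mass of dissolved coffee / mass of dry grounds * 100.

    Dissolved mass is the beverage mass times its TDS fraction
    (total dissolved solids, measured as a percentage).
    """
    dissolved_g = beverage_mass_g * tds_percent / 100
    return dissolved_g / dry_dose_g * 100

# Illustrative espresso shot: 36 g of beverage at 10% TDS pulled
# from an 18 g dose of dry grounds.
print(f"{extraction_yield(36, 10, 18):.1f}%")  # 20.0%
```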
[...] There are existing methods for collecting information on coffee's chemical composition, most notably liquid or gas chromatography combined with mass spectrometry. But these kinds of analyses are expensive and time-consuming, and predictive results are limited. There are also electrochemical techniques for measuring the concentration of caffeine and other molecules, but these have not taken into account coffee strength—a property determined by all the variables that go into preparing a cup of coffee, such as coffee and water masses, grind settings, water temperature and pressure, roast color, and so forth. That's the information likely to be most helpful to baristas.
The coffee industry typically uses a method for measuring the refractive index of coffee—i.e., how light bends as it travels through the liquid—to determine strength, but it doesn't capture the contribution of roast color to the overall flavor profile. So for this latest study, Hendon decided to focus on roast color and beverage strength, the two variables most likely to affect the sensory profile of the final cuppa.
His solution turned out to be quite simple. Hendon repurposed an electrochemical tool called a potentiostat, typically used to test battery and fuel cell performance, to measure how electricity interacted with the liquid, and found that this provided a better measurement of the flavor profile. He even tested it on four different samples of coffee beans and successfully identified the distinctive signature of a batch that had failed the roaster's quality-control process.
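The cyclic voltammetry named in the paper's title drives the potentiostat by sweeping the applied potential linearly up to a limit and back while recording current. A minimal sketch of generating that triangular potential waveform; the voltage limits and scan rate here are illustrative, not values from the study:

```python
def cv_potential(t: float, e_low: float, e_high: float,
                 scan_rate: float) -> float:
    """Triangular potential sweep used in cyclic voltammetry.

    The potential ramps from e_low to e_high at scan_rate (V/s),
    then back down, repeating every full cycle.
    """
    period = 2 * (e_high - e_low) / scan_rate    # one up-and-down cycle
    phase = t % period
    half = period / 2
    if phase <= half:
        return e_low + scan_rate * phase          # forward sweep
    return e_high - scan_rate * (phase - half)    # reverse sweep

# Sweeping 0 V -> 1 V -> 0 V at 0.1 V/s takes 20 s per cycle:
print(cv_potential(5.0, 0.0, 1.0, 0.1))   # 0.5 on the forward sweep
print(cv_potential(15.0, 0.0, 1.0, 0.1))  # 0.5 on the reverse sweep
```

Plotting the measured current against this swept potential gives the voltammogram whose shape serves as the beverage's electrochemical signature.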
Granted, one's taste in coffee is fairly subjective, so Hendon's goal was not to achieve a "perfect" cup but to give baristas a simple tool to consistently reproduce flavor profiles more tailored to a given customer's taste. "It's an objective way to make a statement about what people like in a cup of coffee," said Hendon. "The reason you have an enjoyable cup of coffee is almost certainly that you have selected a coffee of a particular roast color and extracted it to a desired strength. Until now, we haven't been able to separate those variables. Now we can diagnose what gives rise to that delicious cup."
Journal Reference:
Bumbaugh, Robin E., Pennington, Doran L., Wehn, Lena C., et al. Direct electrochemical appraisal of black coffee quality using cyclic voltammetry [open], Nature Communications 2026 17:1 (DOI: 10.1038/s41467-026-71526-5)